Groups of related PMIDs may be named by naming a non-leaf node in the
PMNS tree, e.g. irix.disk.
Use pminfo(1), or the Metric Selection browser within pmchart(1), to
explore the PMNS.
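As a toy illustration (not the real C API), a PMNS can be modelled as a
mapping from hierarchic names to PMIDs, where naming a non-leaf node such
as irix.disk selects every leaf metric beneath it; the names and PMID
values below are invented for the sketch.

```python
# Toy model of a PMNS: hierarchic names mapped to PMIDs.
# Names and numeric PMIDs here are illustrative only; real PMIDs are
# opaque identifiers assigned by the PCP agents.
PMNS = {
    "irix.disk.read": 1001,
    "irix.disk.write": 1002,
    "irix.network.in": 2001,
}

def expand(name):
    """Naming a non-leaf node (e.g. "irix.disk") selects all leaves below it."""
    if name in PMNS:                      # already a leaf node
        return {name: PMNS[name]}
    prefix = name + "."
    return {n: p for n, p in PMNS.items() if n.startswith(prefix)}
```

For example, expand("irix.disk") yields both disk metrics, while
expand("irix.disk.read") yields just the one leaf.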
There may be PMIDs with no associated name in a PMNS; this is most likely
to occur when specific PMIDs are not available in all systems, e.g. if
Informix is not installed on a system, there is no good reason to pollute
the PMNS with names for all of the Informix performance metrics.
Applications which use the PMAPI may have independent versions of a PMNS,
constructed from an initialization file when the application starts; see
pmLoadASCIINameSpace(3), pmLoadNameSpace(3), pmnscomp(1) and pmns(4).
Not all PMIDs need be represented in the PMNS of every application. For
example, an application which monitors disk traffic will likely use a
name space which references only the PMIDs for I/O statistics. Note also
that there is no requirement for the PMNS to be the same on all systems;
in practice, however, most applications would be developed against a
stable PMNS that was assumed to be the same on all systems. Indeed, the
PCP distribution includes a default PMNS for just this purpose.
The default PMNS is located at /var/pcp/pmns/root for base PCP
installations; however, the environment variable PMNS_DEFAULT may be set
to the full pathname of a different PMNS, which will then be used as the
default PMNS.
Other complete versions of the PMNS, suitable for assorted variants on the
base PCP installation, may be found in the files /var/pcp/pmns/root*.
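The default-resolution rule above can be sketched as follows; the path
and the PMNS_DEFAULT variable come from the text, while the function name
and the alternate path in the usage note are hypothetical.

```python
import os

def default_pmns_path(env=os.environ):
    # If PMNS_DEFAULT is set, it overrides the shipped default PMNS
    # at /var/pcp/pmns/root (as described above).
    return env.get("PMNS_DEFAULT", "/var/pcp/pmns/root")
```

For example, with an empty environment this returns /var/pcp/pmns/root,
while setting PMNS_DEFAULT to a hypothetical path such as
/tmp/alternate-pmns makes that the default instead.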
For some PCP deployments on non-SGI platforms, the PMNS may be different,
and in some cases very different. Where possible, a hybrid PMNS that
provides equivalence mappings between the IRIX names and the non-SGI
PMIDs may be found in the files /var/pcp/pmns/equiv*.
Internally (below the PMAPI) the implementation of the Performance
Metrics Collection System (PMCS) uses only the PMIDs, and a PMNS provides
an external mapping from a hierarchic taxonomy of names to PMIDs that is
convenient in the context of a particular system or particular use of the
PMAPI. For the applications programmer, the routines pmLookupName(3) and
pmNameID(3) translate from names in a PMNS to PMIDs, and vice versa.
The PMNS may be traversed using pmGetChildren(3).
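A toy analogue of this name/PMID translation and tree traversal is
sketched below; it mimics the roles of pmLookupName(3), pmNameID(3) and
pmGetChildren(3) but is not their C signatures, and the names and PMID
values are invented.

```python
# Toy analogue of pmLookupName(3)/pmNameID(3)/pmGetChildren(3): the PMNS
# is a bidirectional name<->PMID mapping plus a tree that can be walked.
NAME_TO_PMID = {"irix.disk.read": 1001, "irix.disk.write": 1002}
PMID_TO_NAME = {v: k for k, v in NAME_TO_PMID.items()}

def lookup_name(name):          # cf. pmLookupName(3): name -> PMID
    return NAME_TO_PMID[name]

def name_id(pmid):              # cf. pmNameID(3): PMID -> name
    return PMID_TO_NAME[pmid]

def get_children(nonleaf):      # cf. pmGetChildren(3): immediate children
    prefix = nonleaf + "."
    return sorted({n[len(prefix):].split(".")[0]
                   for n in NAME_TO_PMID if n.startswith(prefix)})
```

For example, get_children("irix.disk") enumerates the immediate child
nodes "read" and "write".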
PMAPI CONTEXT
An application using the PMAPI may manipulate several concurrent
contexts, each associated with a source of performance metrics, e.g.
pmcd(1) on some host, or an archive log of performance metrics as created
by pmlogger(1).
When performance metric values are returned across the PMAPI to a
requesting application, there may be more than one value for a particular
metric. Multiple values, or instances, for a single metric are typically
the result of instrumentation being implemented for each instance of a
set of similar components or services in a system, e.g. independent
counts for each CPU, or each process, or each disk, or each system call
type, etc. This multiplicity of values is not enumerated in the name
space but rather, when performance metrics are delivered across the PMAPI
by pmFetch(3), the format of the result accommodates values for one or
more instances, with an instance-value pair encoding the metric value for
a particular instance.
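The instance-value pairing described above can be sketched with a toy
result structure; the shape loosely mirrors what a fetch returns, but the
field names, PMID and values are invented for illustration.

```python
# Toy sketch of a fetch result: a single metric may carry several
# instance-value pairs, e.g. one per disk, CPU, or process.
result = {
    "pmid": 1001,                       # e.g. a per-disk read counter
    "values": [                         # one (instance, value) pair per disk
        {"inst": 0, "value": 53211},
        {"inst": 1, "value": 9407},
    ],
}

def value_for_instance(res, inst):
    """Pick out the value for one particular instance."""
    for pair in res["values"]:
        if pair["inst"] == inst:
            return pair["value"]
    raise KeyError(inst)
```

The multiplicity lives entirely in the result: the metric's name in the
PMNS stays singular while the "values" list grows with the number of
instances.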
The instances are identified by an internal identifier assigned by the
agent responsible for instantiating the values for the associated
performance metric. Each instance identifier has a corresponding
external instance identifier name (an ASCII string). The routines
pmGetInDom(3), pmLookupInDom(3) and pmNameInDom(3) may be used to
enumerate all instance identifiers, and to translate between internal and
external instance identifiers.
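A toy analogue of an instance domain and the internal/external
translation follows; it mimics the roles of pmGetInDom(3),
pmLookupInDom(3) and pmNameInDom(3) rather than their C signatures, and
the identifiers and device names are invented.

```python
# Toy instance domain: internal integer identifiers mapped to external
# ASCII names (here, made-up disk names).
INDOM = {0: "dks0d1", 1: "dks0d2"}

def get_indom():                        # cf. pmGetInDom(3): enumerate all
    return sorted(INDOM.items())

def lookup_indom(name):                 # cf. pmLookupInDom(3): name -> id
    return next(i for i, n in INDOM.items() if n == name)

def name_indom(inst):                   # cf. pmNameInDom(3): id -> name
    return INDOM[inst]
```

Several metrics (e.g. per-disk read and write counters) can share this
one instance domain, as the text above notes.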
All of the instance identifiers for a particular performance metric are
collectively known as an instance domain. Multiple performance metrics
may share the same instance domain.
If only one instance is ever available for a particular performance
metric, the instance identifier in the result from pmFetch(3) assumes the
special value PM_IN_NULL and may be ignored by the application, and only
one instance-value pair appears in the result for that metric. Under
these circumstances, the associated instance domain (as returned via
pmLookupDesc(3)) is set to PM_INDOM_NULL to indicate that values for this
metric are singular.
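The singular-metric convention can be sketched as follows. PM_IN_NULL
and PM_INDOM_NULL are real PCP constants, but the numeric values, the
descriptor layout and the result shape below are placeholders for
illustration only.

```python
# Placeholder values for the real PCP constants (values assumed here).
PM_IN_NULL = -1
PM_INDOM_NULL = -1

def is_singular(desc):
    # cf. pmLookupDesc(3): a null instance domain marks a singular metric
    return desc["indom"] == PM_INDOM_NULL

# A singular metric's result carries exactly one instance-value pair,
# whose instance identifier may be ignored by the application.
singular = {"pmid": 3001, "values": [{"inst": PM_IN_NULL, "value": 42}]}
```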
The difficult issue of transient performance metrics (e.g. per-filesystem
information, hot-plug replaceable hardware modules, etc.) means that
repeated requests for the same PMID may return different numbers of
values, and/or changes in the particular instance identifiers returned.
Across the PMAPI, all arguments and results involving a ``list of
something'' are declared to be arrays with an associated argument or
function value to identify the number of elements in the list. This has
been done to avoid both the varargs(3) approach and sentinel-terminated
lists.
Where the size of a result is known at the time of a call, it is the
caller's responsibility to allocate (and possibly free) the storage, and
the called function will assume the result argument is of an appropriate
size. Where a result is of variable size and that size cannot be known
in advance (e.g. for pmGetChildren(3), pmGetInDom(3), pmNameInDom(3),
pmNameID(3), pmLookupText(3) and pmFetch(3)) the PMAPI implementation